
    A rule of thumb for the economic capital of a large credit portfolio

    We derive approximate formulae for the credit value-at-risk and the economic capital of a large credit portfolio. The representation allows the risk horizon to be changed quickly and avoids simulation or numerical procedures. The underlying Poisson mixture model is equivalent to CreditRisk+ and uses the same parameters.
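
    For illustration, here is a minimal sketch of the kind of closed-form capital rule such approximations yield, using the standard one-factor large-homogeneous-portfolio (Vasicek) formula rather than the paper's own representation; all parameter values are illustrative assumptions.

```python
# Sketch: one-factor large-homogeneous-portfolio (Vasicek) approximation.
# All parameter values below are illustrative assumptions.
from scipy.stats import norm

def credit_var(pd, rho, alpha):
    """Loss-rate quantile at level alpha for an infinitely granular portfolio."""
    return norm.cdf((norm.ppf(pd) + rho ** 0.5 * norm.ppf(alpha)) / (1 - rho) ** 0.5)

def economic_capital(pd, rho, alpha, exposure, lgd):
    """Economic capital = loss quantile minus expected loss."""
    return (credit_var(pd, rho, alpha) - pd) * exposure * lgd

print(economic_capital(pd=0.02, rho=0.15, alpha=0.999, exposure=1e8, lgd=0.45))
```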

    A Bootstrap Test for the Comparison of Nonlinear Time Series - with Application to Interest Rate Modelling

    We study the drift of stationary diffusion processes through a time series analysis of the autoregression function. A marked empirical process measures the difference between the nonparametric regression functions of two time series. We bootstrap the distribution of a Kolmogorov-Smirnov-type test statistic for two hypotheses: equality of the regression functions and shifted regression functions. Neither Markovian behavior nor Brownian motion errors of the processes are assumed. A detailed simulation study finds the size of the new test to be near the nominal level and its power to be good for a variety of parametric models. The two-sample result serves to test for mean reversion of the diffusion drift in several examples. The Euribor and Libor rates as well as T-bond yields do not show this stylized feature, which is often modelled for interest rates.
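
    A simplified sketch of the testing idea, assuming kernel (Nadaraya-Watson) estimates of the autoregression function and a wild-bootstrap calibration of the sup-distance; the paper's actual statistic is built from a marked empirical process, and the bandwidth and function names below are illustrative.

```python
# Sketch: sup-distance between two kernel autoregression estimates,
# calibrated by a wild bootstrap under the pooled (null) fit.
import numpy as np

def nw(grid, x, y, h):
    # Nadaraya-Watson estimate of m(x) = E[Y | X = x], Gaussian kernel
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def bootstrap_sup_test(s1, s2, h=0.3, n_boot=499, seed=0):
    rng = np.random.default_rng(seed)
    x1, y1 = s1[:-1], s1[1:]                    # lagged pairs of series 1
    x2, y2 = s2[:-1], s2[1:]                    # lagged pairs of series 2
    grid = np.linspace(max(x1.min(), x2.min()), min(x1.max(), x2.max()), 100)
    t_obs = np.abs(nw(grid, x1, y1, h) - nw(grid, x2, y2, h)).max()
    x, y = np.concatenate([x1, x2]), np.concatenate([y1, y2])
    m0 = nw(x, x, y, h)                         # pooled fit imposes the null
    res = y - m0
    n1, t_boot = len(x1), np.empty(n_boot)
    for b in range(n_boot):
        yb = m0 + res * rng.choice([-1.0, 1.0], size=len(res))
        t_boot[b] = np.abs(nw(grid, x[:n1], yb[:n1], h)
                           - nw(grid, x[n1:], yb[n1:], h)).max()
    return t_obs, (t_boot >= t_obs).mean()      # statistic, bootstrap p-value
```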

    A likelihood ratio test for stationarity of rating transitions

    For a time-continuous, discrete-state Markov process as a model for rating transitions, we study time-stationarity by means of a likelihood ratio test. For multiple Markov process data from a multiplicative intensity model, the maximum likelihood parameter estimates can be represented as a martingale transform of the processes counting transitions between the rating states. As a consequence, the profile partial likelihood ratio is asymptotically χ²-distributed. An internal rating data set reveals highly significant non-stationarity.
    Keywords: stationarity, multiple Markov process, counting process, likelihood ratio, panel data
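
    A hedged sketch of such a likelihood ratio test under a multiplicative intensity model: intensities are estimated as occurrence/exposure rates per sub-period, and the pooled fit is compared with the split fit via a chi-squared limit. The data layout (per-sub-period transition counts and times at risk) is an assumption made for illustration.

```python
# Sketch: likelihood ratio test of time-constant transition intensities.
# Inputs are per-sub-period transition counts N (q x q, zero diagonal) and
# times at risk R (length q); assumes every state is occupied in each period.
import numpy as np
from scipy.stats import chi2

def loglik(N, R):
    # Counting-process log-likelihood sum_ij [N_ij log(l_ij) - l_ij R_i],
    # evaluated at the MLE l_ij = N_ij / R_i
    lam = N / R[:, None]
    mask = N > 0
    return (N[mask] * np.log(lam[mask])).sum() - (lam * R[:, None]).sum()

def lr_stationarity_test(N_sub, R_sub):
    ll_split = sum(loglik(N, R) for N, R in zip(N_sub, R_sub))
    ll_pooled = loglik(sum(N_sub), sum(R_sub))
    lr = 2.0 * (ll_split - ll_pooled)
    q = N_sub[0].shape[0]
    df = (len(N_sub) - 1) * q * (q - 1)   # free off-diagonal intensities
    return lr, chi2.sf(lr, df)            # statistic and asymptotic p-value
```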

    Strong consistency for delta sequence ratios

    Almost sure convergence for ratios of delta functions establishes global and local strong consistency for a variety of estimators and data-generating schemes. For instance, the empirical probability function from independent identically distributed random vectors, the empirical distribution function for univariate independent identically distributed observations, and the kernel hazard rate estimate for right-censored and left-truncated data are covered. The convergence rates derive from the Bennett-Hoeffding inequality.
    Keywords: kernel smoothing, hazard rate, left-truncation, right-censoring, empirical process
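
    As a concrete instance of a delta sequence ratio, the following sketch implements the kernel hazard rate estimate for right-censored data (smoothed Nelson-Aalen increments); the Gaussian kernel and the absence of left-truncation and ties are simplifying assumptions.

```python
# Sketch: kernel hazard rate estimate for right-censored data, a ratio of
# delta-sequence estimates (smoothed Nelson-Aalen increments), ignoring ties.
import numpy as np

def kernel_hazard(t_grid, times, uncensored, h):
    order = np.argsort(times)
    t, d = times[order], uncensored[order].astype(float)
    at_risk = len(t) - np.arange(len(t))               # Y(T_(i)), # at risk
    k = np.exp(-0.5 * ((t_grid[:, None] - t[None, :]) / h) ** 2)
    k /= h * np.sqrt(2.0 * np.pi)                      # Gaussian kernel K_h
    return (k * (d / at_risk)).sum(axis=1)             # sum K_h(t-T_i) dA_i
```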

    Kolmogorov-Smirnov-type testing for the partial homogeneity of Markov processes - with application to credit risk.

    In banking, the default behavior of a counterpart is of interest not only for the pricing of transactions under credit risk but also for the assessment of portfolio credit risk. We develop a test of the hypothesis that default intensities are constant over time within a homogeneous group of counterparts, e.g. a rating class. The Kolmogorov-Smirnov-type test builds on the asymptotic normality of counting processes in event history analysis. Right-censoring accommodates Markov processes with more than one non-absorbing state. A simulation study and an example of rating migrations support the usefulness of the test.
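
    A hedged sketch of the testing idea for a single transition: compare the Nelson-Aalen cumulative hazard with its best constant-intensity fit through a sup-distance, here calibrated by parametric bootstrap rather than the asymptotic normality the paper uses; names and the censoring scheme are illustrative.

```python
# Sketch: sup-distance between the Nelson-Aalen estimate and its constant-
# intensity fit, calibrated here by parametric bootstrap for simplicity.
import numpy as np

def nelson_aalen(times, events, grid):
    order = np.argsort(times)
    t, d = times[order], events[order]
    jumps = d / (len(t) - np.arange(len(t)))       # dN(T_i) / Y(T_i)
    return np.interp(grid, t, np.cumsum(jumps), left=0.0)

def constant_intensity_test(times, events, horizon, n_boot=500, seed=1):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, horizon, 200)
    lam = events.sum() / times.sum()               # MLE under constant hazard
    t_obs = np.abs(nelson_aalen(times, events, grid) - lam * grid).max()
    n, t_boot = len(times), np.empty(n_boot)
    for b in range(n_boot):
        tb = rng.exponential(1.0 / lam, n)         # default times under H0
        eb = (tb <= horizon).astype(float)         # censor at the horizon
        tb = np.minimum(tb, horizon)
        lam_b = eb.sum() / tb.sum()
        t_boot[b] = np.abs(nelson_aalen(tb, eb, grid) - lam_b * grid).max()
    return t_obs, (t_boot >= t_obs).mean()
```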

    A rule-of-thumb for the variable bandwidth selection in kernel hazard rate estimation

    In nonparametric curve estimation, the choice of smoothing parameter is critical for practical performance. The nearest-neighbor bandwidth, as introduced by Gefeller and Dette (1992) for censored data in survival analysis, is specified by a single parameter, the number of nearest neighbors. Bandwidth selection in this setting has rarely been investigated, as it is not closely linked to the frequently studied fixed bandwidth. We introduce a selection algorithm for the hazard rate estimation context. The approach uses a newly developed link to the fixed bandwidth which identifies the variable bandwidth as an additional smoothing step, so the procedure gains further data-adaptation after fixed-bandwidth smoothing. A Monte Carlo simulation and a clinical example demonstrate the practical relevance of the findings.
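
    A minimal sketch of the link such a selection can exploit, under the assumption that a fixed bandwidth h has already been chosen: pick the number of nearest neighbors k whose average local radius matches h, then use the local radii as variable bandwidths. The paper's actual algorithm for hazard estimation is more refined.

```python
# Sketch: turn a previously selected fixed bandwidth h into a number of
# nearest neighbors k, then use the local k-NN radii as variable bandwidths.
import numpy as np

def knn_radii(x, k):
    d = np.sort(np.abs(x[:, None] - x[None, :]), axis=1)
    return d[:, k]                      # distance to the k-th nearest neighbor

def match_k_to_h(x, h):
    n = len(x)
    means = np.array([knn_radii(x, k).mean() for k in range(1, n)])
    return 1 + int(np.argmin(np.abs(means - h)))

# Usage idea: h_i = knn_radii(x, match_k_to_h(x, h_fixed)) would replace the
# fixed h in a kernel estimate such as the hazard estimate sketched above.
```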

    Testing large-dimensional correlation

    This paper introduces a test for zero correlation in situations where the correlation matrix is large compared to the sample size. The test statistic is the sum of the squared correlation coefficients in the sample. We derive its limiting null distribution as the number of variables as well as the sample size converge to infinity. A Monte Carlo simulation finds both size and power to be suitable in finite samples. We apply the test to the vector of default rates, a risk factor in portfolio credit risk, in different sectors of the German economy.
    Keywords: testing correlation, n-p-asymptotics, portfolio credit risk
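
    A sketch of the test statistic, the sum of squared off-diagonal sample correlations. Since the paper derives the analytic limiting distribution as n and p grow jointly, the Monte Carlo null below is only an illustrative substitute.

```python
# Sketch: sum of squared off-diagonal correlations, with a Monte Carlo null
# under independent Gaussian data in place of the paper's analytic limit.
import numpy as np

def ssq_corr(x):
    r = np.corrcoef(x, rowvar=False)               # p x p sample correlations
    return (r[np.triu_indices(r.shape[0], 1)] ** 2).sum()

def correlation_test(x, n_sim=1000, seed=2):
    rng = np.random.default_rng(seed)
    n, p = x.shape
    t_obs = ssq_corr(x)
    t_null = np.array([ssq_corr(rng.standard_normal((n, p)))
                       for _ in range(n_sim)])
    return t_obs, (t_null >= t_obs).mean()         # statistic and p-value
```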

    Bias in nearest-neighbor hazard estimation

    In nonparametric curve estimation, the smoothing parameter is critical for performance. In order to estimate the hazard rate, we compare nearest-neighbor selectors that minimize the quadratic, the Kullback-Leibler, and the uniform loss. These measures result in a rule-of-thumb, a cross-validation, and a plug-in selector, respectively. A Monte Carlo simulation within the three-parameter exponentiated Weibull distribution indicates that a counter-factual normal distribution, used as input to the selector, provides a good rule of thumb. If bias is the main concern, minimizing the uniform loss yields the best results, but at the cost of very high variability. Cross-validation has a bias similar to that of the rule of thumb, again with high variability.
    Keywords: hazard rate, kernel smoothing, bandwidth selection, nearest neighbor bandwidth, rule of thumb, plug-in, cross-validation, credit risk
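
    For reference, a sketch of two ingredients of the comparison as described: the exponentiated Weibull hazard used as simulation truth, and a Silverman-type normal-reference bandwidth as the "counter-factual normal" rule-of-thumb input. The parametrization and the constant 1.06 are illustrative assumptions.

```python
# Sketch: the exponentiated Weibull hazard (simulation truth) and a
# Silverman-type normal-reference bandwidth as the rule-of-thumb input.
import numpy as np

def exp_weibull_hazard(t, alpha, theta, sigma):
    # Hazard f/(1-F) for F(t) = (1 - exp(-(t/sigma)**alpha))**theta
    z = (t / sigma) ** alpha
    g = 1.0 - np.exp(-z)
    f = (theta * alpha / sigma) * (t / sigma) ** (alpha - 1) \
        * np.exp(-z) * g ** (theta - 1)
    return f / (1.0 - g ** theta)

def normal_reference_h(times):
    # Classic 1.06 * sigma-hat * n^(-1/5) normal-reference bandwidth
    return 1.06 * times.std(ddof=1) * len(times) ** (-1 / 5)
```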

    The cost for the default of a loan: Linking theory and practice

    When calculating the cost of entering into a credit transaction, the predominant stochastic component is the expected loss. Often in the credit business, the one-year probability of default of the liable counterpart is the only reliable parameter. We use this probability to calculate the exact expected loss of trades with multiple cash flows. Assuming a constant hazard rate for the default time of the liable counterpart, we show that the methodology used in practice is a linear Taylor approximation of our exact calculus. In a second stage, we generalize the calculation to arbitrary hazard rates, for which we provide statistical evidence and develop an estimate from historical data.
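
    A short worked example of the relationship described: with a one-year default probability PD1 and constant hazard lam = -ln(1 - PD1), the exact probability of default before time t is 1 - exp(-lam*t), and its first-order Taylor expansion recovers the linear t * PD1 rule used in practice. The cash flows and LGD below are illustrative.

```python
# Worked example: exact expected loss under a constant hazard versus the
# linear approximation used in practice. Cash flows and LGD are illustrative.
import numpy as np

pd1 = 0.02                              # one-year probability of default
lam = -np.log(1.0 - pd1)                # implied constant hazard rate
times = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
cash_flows = np.array([5.0, 5.0, 5.0, 5.0, 105.0])
lgd = 0.6

p_exact = 1.0 - np.exp(-lam * times)    # P(default before t) = 1 - e^(-lam t)
p_linear = pd1 * times                  # first order: 1 - e^(-lam t) ~ t * pd1

el_exact = lgd * (cash_flows * p_exact).sum()
el_linear = lgd * (cash_flows * p_linear).sum()
# For maturities beyond one year, t * pd1 >= 1 - (1 - pd1)^t, so the linear
# rule overstates the expected loss.
print(el_exact, el_linear)
```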

    The Yield of Ten-Year T-Bonds: Stumbling Towards a 'Good' Forecast

    Due to its status as "the" benchmark yield for the world's largest government bond market and its importance for US monetary policy, the interest in a "good" forecast of the constant-maturity yield of the 10-year U.S. Treasury bond ("T-bond yield") is immense. This paper assesses three univariate time series models for forecasting the yield of T-bonds: it shows that a simple SETAR model proves superior to both the random walk and an ARMA model. However, dividing the sample of bond yields, dating from 1962 to 2005, into a training sample and a test sample reveals the forecast to be biased. A new bias-corrected version is developed, and forecasts for March 2005 to February 2006 are presented. In addition to point estimates, forecast limits are also given.
    Keywords: T-bond, time series, 10-year yield, TAR model, bias-correction, non-linear time series
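
    A minimal sketch of a two-regime SETAR(1) fitted by threshold grid search with per-regime least squares, the kind of model the comparison favors; the paper's lag choice, sample split, and bias correction are not reproduced here, and all names are illustrative.

```python
# Sketch: two-regime SETAR(1) fitted by threshold grid search with
# per-regime least squares; the forecast competes with the random walk.
import numpy as np

def fit_setar(y, thresholds, min_obs=10):
    x, z = y[:-1], y[1:]
    best_sse, best = np.inf, None
    for c in thresholds:
        lo = x <= c
        if lo.sum() < min_obs or (~lo).sum() < min_obs:
            continue                         # keep both regimes populated
        sse, coefs = 0.0, []
        for m in (lo, ~lo):
            A = np.column_stack([np.ones(m.sum()), x[m]])
            beta = np.linalg.lstsq(A, z[m], rcond=None)[0]
            coefs.append(beta)
            sse += ((z[m] - A @ beta) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (c, coefs)
    return best

def forecast_setar(y_last, model):
    c, (b_lo, b_hi) = model
    a, b = b_lo if y_last <= c else b_hi
    return a + b * y_last                    # one-step-ahead point forecast

# Usage idea: fit on a training sample, e.g.
# model = fit_setar(y_train, np.quantile(y_train, np.linspace(0.15, 0.85, 40)))
# then compare forecast_setar(y_train[-1], model) with the random walk forecast
# y_train[-1] on the test sample.
```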